Understand how Agents use tools
Understand how you can create custom tools for agents
Provide enough information so you can understand what third-party tools are doing
Provide a working example
Source: Korinek (2025)
🔍 What are these new tools?
AI agents are semi-autonomous systems built on LLMs that are given access to tools so they can:
🎯 Manage context, e.g. using memory to keep track of previous steps
📝 Create detailed plans
🛠️ Use various tools (e.g. web search, code execution, data analysis, report writing) to decide when to add context
🔄 Execute multi-step tasks
Orchestrate tasks between specialized agents
Source: Korinek (2025)
The Model Context Protocol (MCP) was developed by Anthropic to provide additional context to the LLM
Expose additional information for the LLM to use
LLMs discover what tools are available and decide how to use them
Tools should focus on high-level capabilities that align with a user's intent, rather than being thin wrappers around an API
The metadata associated with each tool (its name, title, description, and inputSchema) forms a "semantic contract" that impacts how the LLM uses the tool
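To make the "semantic contract" concrete, here is a sketch of what a tool's metadata might look like on the wire. The tool name, description, and fields are illustrative (not from the example server built later); the overall shape follows the MCP tool schema:

```python
# Illustrative tool metadata: the "semantic contract" the LLM reads
# when deciding whether and how to call this tool.
tool_metadata = {
    "name": "plot_timeseries",                     # hypothetical tool name
    "title": "Plot a time series",
    "description": "Plot a named column from a dataset against time.",
    "inputSchema": {                               # JSON Schema for the arguments
        "type": "object",
        "properties": {
            "dataset": {"type": "string", "description": "Dataset file name"},
            "column": {"type": "string", "description": "Column to plot"},
        },
        "required": ["dataset", "column"],
    },
}

# The LLM never sees your implementation, only this metadata, so vague
# names or descriptions lead directly to poor tool selection.
print(sorted(tool_metadata))
```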
The documentation site (https://modelcontextprotocol.io/docs/getting-started/intro) is an excellent place to start; much of this material is drawn from that source
MCP is made up of three parts:
MCP host: the interface, e.g. a chat application or a code editor
MCP client: sits between the LLM and the tools, relaying requests and results
MCP server: provides the tools, their actions, and descriptions of their capabilities
Source: MCP
When you submit a query:
1. The client gets the list of available tools from the server
2. Your query is sent to the LLM, let's say Claude, along with the tool descriptions
3. Claude decides which tools (if any) to use
4. The client executes any requested tool calls through the server
5. Results are sent back to Claude
6. Claude provides a natural language response
7. The response is displayed to you
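The steps above can be sketched as a toy client loop. The "LLM" and "server" here are stand-in Python functions, not the real MCP or Claude APIs; the point is only the shape of the flow:

```python
# Toy sketch of the client-side query flow. All functions are stand-ins.

def server_list_tools():
    # Step 1: the server advertises its tools with descriptions
    return {"add": "Add two integers a and b."}

def llm_respond(query, tools):
    # Steps 2-3: a stand-in for Claude deciding whether to call a tool
    if "sum" in query:
        return {"tool_call": ("add", {"a": 2, "b": 3})}
    return {"text": "No tool needed."}

def server_call_tool(name, args):
    # Step 4: the client executes the requested call through the server
    if name == "add":
        return args["a"] + args["b"]
    raise ValueError(f"unknown tool {name}")

def handle_query(query):
    tools = server_list_tools()                  # step 1
    decision = llm_respond(query, tools)         # steps 2-3
    if "tool_call" in decision:
        name, args = decision["tool_call"]
        result = server_call_tool(name, args)    # step 4
        # Steps 5-7: result goes back to the LLM, which phrases the answer
        return f"The answer is {result}."
    return decision["text"]

print(handle_query("What is the sum of 2 and 3?"))  # The answer is 5.
```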
Source: MCP
Goal is to explore setting up a remote MCP server exposing tools and resources (e.g. the data) and connecting it to a chatbot on the PopHIVE website
We will build a local instance to show how it works
We will rely on the MCP Python SDK - https://github.com/modelcontextprotocol/python-sdk
Next, you create a function with the `@mcp.tool()` decorator (where `mcp` is your server instance)
The name of the tool and the description are used by the LLM to decide how to use the tool
Note that the server has access to the data found in `DATA_DIR`; you could add other data connections here
Since this example uses Claude Desktop,
update the `~/Library/Application Support/Claude/claude_desktop_config.json` file with the following
This gives Claude access to the new tool
{
  "mcpServers": {
    "data-visualization": {
      "command": "/Users/mad265/git-pub/dissc-agent-tooling/tasks/data-visualization-mcp/venv/bin/python",
      "args": [
        "/Users/mad265/git-pub/dissc-agent-tooling/tasks/data-visualization-mcp/data_viz_server.py"
      ],
      "cwd": "/Users/mad265/git-pub/dissc-agent-tooling/tasks/data-visualization-mcp",
      "env": {
        "DATA_DIR": "/Users/mad265/git-pub/dissc-agent-tooling/data"
      }
    }
  }
}
Worth mentioning that MCP servers can also be hosted remotely
Additional security and cost considerations apply when building a hosted server
MCP lets the LLM decide at run time whether and how to add context, based on the prompt it is given
RAG adds context through a fixed retrieval pipeline before the LLM responds
MCP is therefore more flexible: the model chooses among tools and actions as the task unfolds
RAG is more rigid: retrieval happens the same way regardless of the task
Developing tools for an LLM to use is the first step in developing modular agents
Encourage people to think about breaking down tasks into smaller components so both tools and agents can be reused
Agent2Agent (A2A) communication is one way to achieve this: it abstracts to the agent level, allowing agents to discover what other agents can do